Maximum Roaming Multi-Task Learning
Authors
Abstract
Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights; the partitions may be disjoint or overlapping. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the task sharing. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization, and consistently achieves improved performances compared to recent multi-task learning formulations.
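To make the roaming mechanism concrete, here is a minimal sketch of dropout-style partition updates, assuming a binary unit-by-task assignment matrix. The names (init_partition, roaming_step), the per-task one-swap schedule, and the masking granularity are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_partition(num_units, num_tasks, p, rng):
    """Random binary mask: mask[i, t] == 1 means unit i is active for task t,
    each unit kept for a task with probability p (dropout-style)."""
    return (rng.random((num_units, num_tasks)) < p).astype(np.int8)

def roaming_step(mask, visited, rng):
    """One roaming update: per task, swap one active unit for one unit the
    task has never used, so every unit eventually visits every task."""
    num_units, num_tasks = mask.shape
    for t in range(num_tasks):
        active = np.flatnonzero(mask[:, t] == 1)
        unvisited = np.flatnonzero(visited[:, t] == 0)
        if active.size == 0 or unvisited.size == 0:
            continue  # this task has already roamed over every unit
        off, on = rng.choice(active), rng.choice(unvisited)
        mask[off, t], mask[on, t] = 0, 1
        visited[on, t] = 1

rng = np.random.default_rng(0)
mask = init_partition(num_units=64, num_tasks=3, p=0.5, rng=rng)
visited = mask.copy()  # units each task has been assigned so far
# During training, column t of the mask multiplies the shared features for
# task t, and roaming_step is called at a regulated frequency.
roaming_step(mask, visited, rng)
```

This sketch only illustrates the bookkeeping of the partition updates; the paper applies such masks to shared network units, and the update frequency is the regulated hyperparameter the abstract refers to.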
Similar resources
Learning Multi-Level Task Groups in Multi-Task Learning
In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learning the task groups at only a single level, which may not be sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propos...
Multi-Objective Multi-Task Learning
This dissertation presents multi-objective multi-task learning, a new learning framework. Given a fixed sequence of tasks, the learned hypothesis space must minimize multiple objectives. Since these objectives are often in conflict, we cannot find a single best solution, so we analyze a set of solutions. We first propose and analyze a new learning principle, empirically efficient learning. From...
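For context, the usual way to formalize a "set of solutions" when objectives conflict is Pareto optimality; this standard background definition is not a claim about the dissertation's exact formulation:

\[
h^{*} \text{ is Pareto optimal} \iff \nexists\, h \in \mathcal{H} \text{ s.t. } f_i(h) \le f_i(h^{*}) \ \forall i \in \{1,\dots,m\} \text{ and } f_j(h) < f_j(h^{*}) \text{ for some } j,
\]

where \(f_1,\dots,f_m\) are the objectives to be minimized over the hypothesis space \(\mathcal{H}\).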
Multi-Task Multi-Sample Learning
In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model which enables joint regularization of the E-SVMs without any additional cost over the original ensemble learning. The adva...
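A minimal sketch of the exemplar-SVM ensemble setup described above (one linear SVM per positive sample, trained against all negatives), using scikit-learn for illustration; the function name and class weighting are assumptions, and the joint MSL regularization itself is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(positives, negatives):
    """One linear SVM per positive exemplar, each trained against all
    negatives (the E-SVM setup); 'balanced' weights offset the 1-vs-many
    class imbalance."""
    ensemble = []
    labels = np.concatenate([[1], np.zeros(len(negatives))])
    for exemplar in positives:
        X = np.vstack([exemplar[None, :], negatives])
        svm = LinearSVC(C=1.0, class_weight="balanced")
        svm.fit(X, labels)
        ensemble.append(svm)
    return ensemble

# Toy usage with random features: 5 exemplars, 100 negatives.
rng = np.random.default_rng(0)
ensemble = train_exemplar_svms(rng.normal(1, 1, (5, 16)),
                               rng.normal(-1, 1, (100, 16)))
```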
Graphical Multi-Task Learning
We investigate multi-task learning in a setting where relationships between tasks are modeled by a graph structure. Most existing methods treat all pairs of tasks as being equally related, which can hurt performance when the true structure of task relationships is more complex. Our method uses regularization to encourage models for task pairs to be similar whenever they are connected in the ...
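A common instantiation of this idea, shown here only as a sketch (the paper's exact penalty may differ), couples the weight vectors of connected task pairs with a squared-distance regularizer:

```python
import numpy as np

def graph_coupling_penalty(task_weights, edges, lam=0.1):
    """Sum of squared distances between the weight vectors of tasks joined
    by a graph edge; adding this to the joint loss pulls connected tasks'
    models toward each other, and only those."""
    return lam * sum(np.sum((task_weights[i] - task_weights[j]) ** 2)
                     for i, j in edges)

# Toy usage: three tasks, with tasks 0-1 and 1-2 related.
w = {t: np.random.default_rng(t).normal(size=8) for t in range(3)}
print(graph_coupling_penalty(w, edges=[(0, 1), (1, 2)]))
```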
Federated Multi-Task Learning
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...
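For reference, the general multi-task objective this line of work optimizes over \(m\) devices/tasks couples per-task weights through a task-relationship regularizer; the notation below is assumed rather than copied from the paper:

\[
\min_{\mathbf{W},\,\Omega}\ \sum_{t=1}^{m}\sum_{i=1}^{n_t} \ell_t\!\left(\mathbf{w}_t^{\top}\mathbf{x}_t^{i},\, y_t^{i}\right) \;+\; \mathcal{R}(\mathbf{W}, \Omega),
\]

where column \(\mathbf{w}_t\) of \(\mathbf{W}\) is task \(t\)'s model and \(\Omega\) encodes the task relationships.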
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i10.17125